AI 'Nudify' Websites Are Raking in Millions of Dollars

WIRED

For years, so-called "nudify" apps and websites have mushroomed online, allowing people to create nonconsensual and abusive images of women and girls, including child sexual abuse material. Despite steps by some lawmakers and tech companies to limit the harmful services, millions of people still access the websites every month, and the sites' creators may be making millions of dollars each year, new research suggests. An analysis of 85 nudify and "undress" websites--which allow people to upload photos and use AI to generate "nude" pictures of the subjects with just a few clicks--found that most of the sites rely on tech services from Google, Amazon, and Cloudflare to operate and stay online. The findings, published by Indicator, a publication investigating digital deception, show that the websites had a combined average of 18.5 million visitors for each of the past six months and collectively may be making up to $36 million per year. Alexios Mantzarlis, a cofounder of Indicator and an online safety researcher, says the murky nudifier ecosystem has become a "lucrative business" that "Silicon Valley's laissez-faire approach to generative AI" has allowed to persist.


High School Is Becoming a Cesspool of Sexually Explicit Deepfakes

The Atlantic - Technology

For years now, generative AI has been used to conjure all sorts of realities--dazzling paintings and startling animations of worlds and people, both real and imagined. This power has brought with it a tremendous dark side that many experts are only now beginning to contend with: AI is being used to create nonconsensual, sexually explicit images and videos of children. And not just in a handful of cases--perhaps millions of kids nationwide have been affected in some way by the emergence of this technology, whether directly victimized themselves or made aware of other students who have been. This morning, the Center for Democracy and Technology, a nonprofit that advocates for digital rights and privacy, released a report on the alarming prevalence of nonconsensual intimate imagery (or NCII) in American schools. In the past school year, the center's polling found, 15 percent of high schoolers reported hearing about a "deepfake"--an AI-generated image--that depicted someone associated with their school in a sexually explicit or intimate manner.